FFNSL: Feed-Forward Neural-Symbolic Learner
Authors
Abstract
Logic-based machine learning aims to learn general, interpretable knowledge in a data-efficient manner. However, labelled data must be specified in a structured logical form. To address this limitation, we propose a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FFNSL), that integrates a logic-based machine learning system capable of learning from noisy examples with neural networks, in order to learn interpretable knowledge from labelled unstructured data. We demonstrate the generality of FFNSL on four neural-symbolic classification problems, where different pre-trained neural network models and logic-based machine learning systems are integrated to learn interpretable knowledge from sequences of images. We evaluate the robustness of our framework by using images subject to distributional shifts, for which the neural networks may predict incorrectly and with high confidence. We analyse the impact these shifts have on the accuracy of the learned knowledge and on run-time performance, comparing FFNSL to tree-based and pure neural approaches. Our experimental results show that FFNSL outperforms the baselines, learning more accurate and interpretable knowledge with fewer examples.
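The abstract describes a pipeline in which pre-trained neural networks map unstructured inputs (e.g., images) to symbolic labels, and the predictions are then handed to a noise-tolerant logic-based learner. The sketch below illustrates that data flow only; it is not the authors' implementation, and the names `perception_net`, `to_weighted_examples`, and `SymbolicLearner` are hypothetical placeholders.

```python
# Minimal sketch of an FFNSL-style pipeline (hypothetical names, not the authors' code).
# A pre-trained perception network labels each image in a sequence; its (label, confidence)
# pairs are converted into weighted examples for a downstream logic-based learner.

import torch
import torch.nn.functional as F

def perceive(perception_net: torch.nn.Module, images: torch.Tensor):
    """Run the pre-trained network and return (predicted label, confidence) per image."""
    with torch.no_grad():
        probs = F.softmax(perception_net(images), dim=1)
        conf, labels = probs.max(dim=1)
    return list(zip(labels.tolist(), conf.tolist()))

def to_weighted_examples(sequence_preds, target):
    """Turn network predictions for one image sequence into a weighted symbolic example.

    The weight reflects the joint confidence of the predictions, so a noise-tolerant
    logic-based learner can discount examples built from uncertain perceptions.
    """
    weight = 1.0
    atoms = []
    for position, (label, conf) in enumerate(sequence_preds):
        atoms.append(f"digit({position},{label})")
        weight *= conf
    return {"weight": weight, "context": atoms, "label": target}

# Hypothetical usage: build weighted examples and hand them to a symbolic learner.
# examples = [to_weighted_examples(perceive(net, seq), y) for seq, y in dataset]
# hypothesis = SymbolicLearner().learn(examples)   # e.g., an ILP system robust to noise
```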
Similar resources
Feed forward neural network entities
Feed Forward Neural Networks (FFNNs) are computational techniques inspired by the physiology of the brain and used in the approximation of general mappings from one finite-dimensional space to another. They present a practical application of the theoretical resolution of Hilbert's 13th problem by Kolmogorov and Lorenz, and have been used with success in a variety of applications. However, as the...
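As a small illustration of a feed-forward network approximating a mapping from one finite-dimensional space to another, the sketch below shows a forward pass through a two-layer network; the layer sizes and tanh activation are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a feed-forward network as an approximator of a mapping R^n -> R^m.
# Layer sizes and the tanh activation are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Small random weights and zero biases for one fully connected layer."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    """Propagate an input through hidden tanh layers and a linear output layer."""
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:          # keep the final layer linear
            x = np.tanh(x)
    return x

# Example: a 3 -> 16 -> 2 network mapping a 3-dimensional input to a 2-dimensional output.
layers = [init_layer(3, 16), init_layer(16, 2)]
y = forward(np.array([0.5, -1.0, 2.0]), layers)
```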
Signal Prediction by Layered Feed-Forward Neural Network (RESEARCH NOTE)
In this paper a nonparametric neural network (NN) technique for prediction of future values of a signal based on its past history is presented. This approach bypasses modeling, identification, and parameter estimation phases that are required by conventional parametric techniques. A multi-layer feed forward NN is employed. It develops an internal model of the signal through a training operation...
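The note describes predicting a signal's future values from its past history with a multi-layer feed-forward network. A minimal sketch of that setup follows; the window length, network size, and noisy sine-wave data are illustrative assumptions, not details from the paper.

```python
# Sketch of feed-forward signal prediction: learn x[t] from the previous `window` samples.

import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(signal, window):
    """Stack sliding windows of past samples as inputs; the next sample is the target."""
    X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
    y = signal[window:]
    return X, y

t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

X, y = make_windows(signal, window=16)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])      # train on the signal's past history
pred = model.predict(X[1500:])     # predict the held-out future values
```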
Investigating redundancy in feed-forward neural classifiers
In this article we will focus on how we can investigate (read: visualise) the clustering behaviour of neurons during training. This clustering property has already been investigated before, by Annema, Vogtlander and Schmidt. However, we will present a different approach to visualisation, illustrated by experiments performed on two-class problems. © 1997 Elsevier Science B.V.
Feed-forward neural networks: a geometrical perspective
The convex hull of any subset of vertices of an n-dimensional hypercube contains no other vertex of the hypercube. This result permits the application of some theorems of n-dimensional geometry to digital feed-forward neural networks. Also, the construction of the convex hull is proposed as an alternative to more traditional learning algorithms. Some preliminary simulation results are reported.
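A worked restatement of the geometric claim (my phrasing, not quoted from the paper): every vertex of the hypercube is strictly separated from all other vertices by a linear functional, so it cannot be a convex combination of them.

```latex
% Restatement of the claim (phrasing mine, not quoted from the paper).
% A vertex v is separated from all other vertices by a linear functional,
% hence it cannot lie in the convex hull of any set of other vertices.
\[
  v \in \{0,1\}^n, \qquad
  f_v(x) \;=\; \sum_{i=1}^{n} (2 v_i - 1)\, x_i .
\]
\[
  f_v(v) = \sum_{i=1}^{n} v_i
  \quad\text{while}\quad
  f_v(u) \le \Bigl(\sum_{i=1}^{n} v_i\Bigr) - 1
  \;\;\text{for every vertex } u \ne v,
\]
\[
  \text{so } v \notin \operatorname{conv}(S)
  \text{ for any } S \subseteq \{0,1\}^n \setminus \{v\}.
\]
```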
Max-Entropy Feed-Forward Clustering Neural Network
The outputs of a non-linear feed-forward neural network are positive, and can be treated as probabilities when they are normalized to sum to one. If we take the Entropy-Based Principle into consideration, the outputs for each sample can be represented as the distribution of that sample over the different clusters. The Entropy-Based Principle is the principle with which we could estimate the unknown distribution ...
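As a small illustration of normalizing positive network outputs into a per-sample cluster distribution and measuring its entropy (a generic sketch, not the paper's algorithm):

```python
# Generic sketch: normalize positive network outputs into per-sample cluster
# distributions and compute their entropy. Not the paper's algorithm.

import numpy as np

def to_distribution(outputs):
    """Normalize positive outputs so each row sums to one (a cluster distribution)."""
    outputs = np.asarray(outputs, dtype=float)
    return outputs / outputs.sum(axis=1, keepdims=True)

def entropy(dist, eps=1e-12):
    """Shannon entropy of each per-sample distribution, in nats."""
    return -(dist * np.log(dist + eps)).sum(axis=1)

# Example: raw positive outputs for two samples over three clusters.
raw = [[2.0, 1.0, 1.0],
       [0.1, 0.1, 5.0]]
dist = to_distribution(raw)
print(entropy(dist))   # the second sample's sharper distribution has lower entropy
```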
Journal
Journal title: Machine Learning
Year: 2023
ISSN: 0885-6125, 1573-0565
DOI: https://doi.org/10.1007/s10994-022-06278-6